Deep learning: Challenges in learning and generalization

Artificial neural networks have become very popular in the research community over the last decade. In particular, recurrent networks are widely known for their strong performance on sequential tasks involving language. However, their ability to learn and generalize is often conflated with their capacity to memorize. Today, recurrent networks are frequently used as black boxes, with little insight into what they can actually learn. In this talk, I will explain some of the current limitations of deep learning and show some basic problems where common neural architectures fail to learn properly. I will finish with a discussion of neural networks augmented with memory, and will describe ideas on how we might move beyond reliance on supervised learning to improve generalization performance.